20 - Recap Clip 4.7: Conclusion [ID:30422]

We are talking about very simple decision theory.

Remember, the setting is we have the usual state-based agent.

State-based meaning we have a world model and by now that's a Bayesian network.

We complemented that with something we don't really understand yet, which is a decision

model.

One of the things that we looked at was, essentially, utilities.

We want to have a utility-based agent and we want utilities rather than goals to drive

the decisions.

We also finished Bayesian networks last week and we looked at how to construct Bayesian

networks in practice and how to reason on these networks.

It turns out, as it always does, that everything depends on whether the topology of our world model has good properties, which usually means something tree-like.

In AI we love trees.

Here we can even tolerate polytrees, which is not that much different.

If the structures we're working with, the dependency structures we're working with,

are somehow tree-like, then by now we expect polynomial inference.
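As a toy illustration of why tree-shaped dependency structures are easy, here is a minimal sketch (my own made-up numbers, not anything from the lecture) of exact inference by enumeration on a tiny chain-shaped, hence polytree, Bayesian network A -> B -> C:

```python
# Hypothetical illustration: a chain A -> B -> C with made-up CPTs.
P_A = {True: 0.3, False: 0.7}                    # P(A)
P_B_given_A = {True: {True: 0.9, False: 0.1},    # P(B | A)
               False: {True: 0.2, False: 0.8}}
P_C_given_B = {True: {True: 0.7, False: 0.3},    # P(C | B)
               False: {True: 0.1, False: 0.9}}

def p_c(c):
    """P(C=c), obtained by summing out A and B."""
    total = 0.0
    for a in (True, False):
        for b in (True, False):
            total += P_A[a] * P_B_given_A[a][b] * P_C_given_B[b][c]
    return total

print(round(p_c(True), 4))  # -> 0.346
```

On a chain you can push the sums inward and eliminate one variable at a time, which is why the cost stays polynomial; on a densely connected graph no such elimination order exists and the work blows up.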

It was the same thing with the constraint networks.

You can actually even do it for DPLL and even, partially, for first-order logic.

If the dependencies are well behaved, then you can do stuff polynomially.

If it isn't, then you're looking at at least exponential cost, maybe worse, maybe even undecidable.

In this case here, everything is decidable, but that doesn't help us in practice because

things are getting so big that it could just as well be undecidable.

That's really what we did.

What we didn't do is all the fun stuff where we get around the hard cases.

What you can do is inference by sampling.
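A hedged sketch of the simplest such scheme, prior sampling: draw each variable in topological order from its conditional distribution and estimate probabilities as frequencies. The chain network and its probabilities are my own illustration, not the lecture's:

```python
import random

def sample_net(rng):
    """Draw one joint sample from a toy chain A -> B -> C (made-up CPTs)."""
    a = rng.random() < 0.3                    # P(A=true) = 0.3
    b = rng.random() < (0.9 if a else 0.2)    # P(B=true | A)
    c = rng.random() < (0.7 if b else 0.1)    # P(C=true | B)
    return a, b, c

def estimate_p_c(n=100_000, seed=0):
    """Estimate P(C=true) as the fraction of samples with C=true."""
    rng = random.Random(seed)
    hits = sum(sample_net(rng)[2] for _ in range(n))
    return hits / n

print(estimate_p_c())  # close to the exact answer 0.346 for these numbers
```

The estimate converges at the usual Monte Carlo rate regardless of the graph's topology, which is exactly why sampling is a way around the hard, non-tree-like cases.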

If you have these non-polytree, meaning cyclic, parts of your graph, you can just cluster those automatically into derived random variables, which makes inference faster, but of course the results are worse, because we are working with an approximate, coarser world model.

You can always compile to SAT. Empirically, anything that's decidable you can compile into SAT, and that is efficient.
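To make the compile-to-SAT idea concrete, here is a minimal DPLL-style solver over CNF clauses. The encoding (DIMACS-style signed integers, so `[1, -2]` means x1 OR NOT x2) and the example instance are my own illustration, not the lecture's construction:

```python
def simplify(clauses, assignment):
    """Drop satisfied clauses, remove falsified literals; None on conflict."""
    assigned = set(assignment)
    out = []
    for clause in clauses:
        if any(lit in assigned for lit in clause):
            continue                      # clause already satisfied
        reduced = [lit for lit in clause if -lit not in assigned]
        if not reduced:
            return None                   # empty clause: conflict
        out.append(reduced)
    return out

def dpll(clauses, assignment=()):
    """Return a satisfying tuple of signed literals, or None if UNSAT."""
    clauses = simplify(clauses, assignment)
    if clauses is None:
        return None
    if not clauses:                       # every clause satisfied
        return assignment
    var = abs(clauses[0][0])              # branch on an unassigned variable
    for lit in (var, -var):
        result = dpll(clauses, assignment + (lit,))
        if result is not None:
            return result
    return None

# (x1 OR x2) AND (NOT x1 OR x2) AND (NOT x2 OR x3)
print(dpll([[1, 2], [-1, 2], [-2, 3]]))
```

Real compilations of Bayesian inference go through weighted model counting rather than plain satisfiability, and are far more involved; this only sketches the search core such pipelines rest on.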

Then of course you can do dynamic Bayesian networks.

We're going to get into that a little bit later.

Then you can go away from propositional logic into something more interesting.

Some of that is actually nicely described in the third edition of Russell and Norvig.

Good reading if you're interested in these things.

Part of chapter: Recaps

Access: Open access

Duration: 00:04:57 min

Recording date: 2021-03-30

Uploaded: 2021-03-31 10:48:06

Language: en-US

Recap: Conclusion

Main video on the topic in chapter 4 clip 7.
